    Making AI Meaningful Again

    Artificial intelligence (AI) research enjoyed an initial period of enthusiasm in the 1970s and 80s. But this enthusiasm was tempered by a long interlude of frustration when genuinely useful AI applications failed to be forthcoming. Today, we are experiencing once again a period of enthusiasm, fired above all by the successes of the technology of deep neural networks or deep machine learning. In this paper we draw attention to what we take to be serious problems underlying current views of artificial intelligence encouraged by these successes, especially in the domain of language processing. We then show an alternative approach to language-centric AI, in which we identify a role for philosophy.

    Ontologies of common sense, physics and mathematics

    The view of nature we adopt in the natural attitude is determined by common sense, without which we could not survive. Classical physics is modelled on this common-sense view of nature, and uses mathematics to formalise our natural understanding of the causes and effects we observe in time and space when we select subsystems of nature for modelling. But in modern physics, we do not go beyond the realm of common sense by augmenting our knowledge of what is going on in nature. Rather, we have measurements that we do not understand, so we know nothing about the ontology of what we measure. We help ourselves by using entities from mathematics, which we fully understand ontologically. But we have no ontology of the reality of modern physics; we have only what we can assert mathematically. In this paper, we describe the ontology of classical and modern physics against this background and show how it relates to the ontology of common sense and of mathematics.

    The Birth of Ontology and the Directed Acyclic Graph

    Barry Smith recently discussed the diagraphs of book eight of Jacob Lorhard’s Ogdoas scholastica under the heading “birth of ontology” (Smith, 2022; this issue). Here, I highlight the commonalities between the original usage of diagraphs in the tradition of Ramus for didactic purposes and the usage of their present-day successors, modern ontologies, for computational purposes. The modern ideas of ontology and of the universal computer were born just two generations apart in the breakthrough century of instrumental reason.
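
    The computational use of ontologies mentioned here rests on representing a subsumption hierarchy as a directed acyclic graph. Below is a minimal Python illustration; the class names are an invented, simplified fragment in the spirit of Lorhard's divisions, not taken from the Ogdoas scholastica.

```python
# Illustrative only: a tiny ontology fragment encoded as a DAG of is_a
# edges, plus a topological-sort check that the hierarchy is acyclic.
# The class names are invented for illustration.
from graphlib import TopologicalSorter  # standard library, Python 3.9+

is_a = {
    "being":     [],             # root of the hierarchy
    "substance": ["being"],
    "accident":  ["being"],
    "body":      ["substance"],
    "spirit":    ["substance"],
}

# static_order() raises graphlib.CycleError if the hierarchy contains a
# cycle, i.e. if the graph is not in fact acyclic.
print(list(TopologicalSorter(is_a).static_order()))
# Parents precede children in the output, e.g. 'being' comes first.
```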

    Certifiable AI

    Implicit stochastic models, including both ‘deep neural networks’ (dNNs) and the more recent unsupervised foundational models, cannot be explained. That is, it cannot be determined how they work, because the interactions of the millions or billions of terms that are contained in their equations cannot be captured in the form of a causal model. Because users of stochastic AI systems would like to understand how they operate in order to be able to use them safely and reliably, there has emerged a new field called ‘explainable AI’ (XAI). When we examine the XAI literature, however, it becomes apparent that its protagonists have redefined the term ‘explanation’ to mean something else, namely: ‘interpretation’. Interpretations are indeed sometimes possible, but we show that they give at best only a subjective understanding of how a model works. We propose an alternative to XAI, namely certified AI (CAI), and describe how an AI can be specified, realized, and tested in order to become certified. The resulting approach combines ontologies and formal logic with statistical learning to obtain reliable AI systems which can be safely used in technical applications.
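
    As a rough illustration of the certification idea, the sketch below wraps an opaque stochastic model in formally specified pre- and postconditions. The names, the toy ontology, and the numeric bounds are all invented for illustration; this is not the paper's specification method, only a hint at how a deterministic guard, unlike the learned model itself, can be exhaustively tested against its specification.

```python
# Illustrative sketch (not from the paper): a stochastic model is wrapped
# in a formally specified guard. The domain terms and bounds are invented.
from dataclasses import dataclass

# Toy "ontology" constraint: dosage decisions are defined only for adults.
ADULT_MIN_AGE = 18

@dataclass
class Patient:
    age: int
    weight_kg: float

def stochastic_dose_model(p: Patient) -> float:
    """Stand-in for an opaque learned model (e.g. a dNN)."""
    return 0.5 * p.weight_kg  # placeholder prediction

def certified_dose(p: Patient) -> float:
    # Precondition from the specification: the input must lie in the
    # ontology-defined domain of the model.
    if p.age < ADULT_MIN_AGE:
        raise ValueError("out of specified domain: paediatric patient")
    dose = stochastic_dose_model(p)
    # Postcondition from the specification: hard safety bound on output.
    if not (0.0 < dose <= 4.0 * p.weight_kg):
        raise ValueError("model output violates certified safety bound")
    return dose

print(certified_dose(Patient(age=30, weight_kg=70.0)))  # 35.0
```

    The point of the design is that the guard is a small, fully inspectable causal artefact: it can be verified against the specification even though the wrapped stochastic model cannot.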

    Causality as a partitioning principle for upper ontologies

    In his “Bridging mainstream and formal ontology”, Augusto (2021) gives an excellent analysis of Dietrich von Freiberg’s idea of using causality as a partitioning principle for upper ontologies. For this, Dietrich’s notion of extrinsic principles is crucial. The question of whether causation can, and indeed should, be used as a partitioning principle for ontologies is discussed using mathematics and physics as examples.

    Permutation-validated principal components analysis of microarray data

    BACKGROUND: In microarray data analysis, the comparison of gene-expression profiles with respect to different conditions and the selection of biologically interesting genes are crucial tasks. Multivariate statistical methods have been applied to analyze these large datasets. Less work has been published concerning the assessment of the reliability of gene-selection procedures. Here we describe a method to assess reliability in multivariate microarray data analysis using permutation-validated principal components analysis (PCA). The approach is designed for microarray data with a group structure. RESULTS: We used PCA to detect the major sources of variance underlying the hybridization conditions, followed by gene selection based on PCA-derived and permutation-based test statistics. We validated our method by applying it to well-characterized yeast cell-cycle data and to two datasets from our laboratory. We could describe the major sources of variance, select informative genes and visualize the relationship of genes and arrays. We observed differences in the level of the explained variance and the interpretability of the selected genes. CONCLUSIONS: Combining data visualization and permutation-based gene selection, permutation-validated PCA enables one to illustrate gene-expression variance between several conditions and to select genes by taking into account the relationship of between-group to within-group variance of genes. The method can be used to extract the leading sources of variance from microarray data, to visualize relationships between genes and hybridizations, and to select informative genes in a statistically reliable manner. This selection accounts for the level of reproducibility of replicates or group structure as well as gene-specific scatter. Visualization of the data can support a straightforward biological interpretation.
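
    The abstract describes the procedure only in outline. The following Python sketch conveys the general idea under simplifying assumptions: the expression matrix is arrays × genes, the gene-wise statistic is the absolute loading on the first principal component, and the null distribution is built by permuting each gene's values across arrays. These choices are illustrative, not the authors' exact test statistic, and the group structure the paper exploits is not modelled here.

```python
# Minimal sketch of permutation-validated PCA gene selection.
# Assumptions (not from the paper): statistic = |PC1 loading|,
# null model = independent within-gene permutations.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

def pc1_loadings(X):
    """Absolute gene loadings on the first principal component."""
    return np.abs(PCA(n_components=1).fit(X).components_[0])

def permutation_validated_genes(X, n_perm=200, alpha=0.05):
    """Select genes whose PC1 loading beats a gene-wise permutation null."""
    observed = pc1_loadings(X)
    exceed = np.zeros(X.shape[1])
    for _ in range(n_perm):
        # Permute each gene's values across arrays independently; this
        # destroys the shared structure between genes and yields a null
        # distribution of loadings under "no common source of variance".
        Xp = np.apply_along_axis(rng.permutation, 0, X)
        exceed += pc1_loadings(Xp) >= observed
    pvals = (exceed + 1) / (n_perm + 1)  # add-one permutation p-value
    return np.where(pvals < alpha)[0], pvals

# Toy usage: 12 arrays (two groups of 6), 50 genes, 5 of them informative.
X = rng.normal(size=(12, 50))
X[:6, :5] += 2.0  # group effect confined to the first 5 genes
genes, p = permutation_validated_genes(X)
print(genes)      # should recover (most of) genes 0-4
```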

    Unsterblichkeit 2.0

    The pattern of argument put forward in this essay has the following steps: 1. The human mind is inseparable from the body; the two form a continuum. 2. Our consciousness and all mental phenomena built upon it are the emanation of a material process caused by a complex system. 3. Complex systems cannot be modelled mathematically or understood causally. 4. Computers are Turing machines; they can compute only mathematical models. There will never be hyper-Turing machines, and even if there were, they too could compute only mathematical models. 5. It is not possible to replace the body, as the substrate of the mind, with a computer. Digital immortality is therefore an impossibility.

    Why machines do not understand: A response to SĂžgaard

    Some defenders of so-called ‘artificial intelligence’ believe that machines can understand language. In particular, Søgaard has argued for a thesis of this sort in his “Understanding models understanding language” (2022). His idea is that (1) where there is semantics there is also understanding, and (2) machines are not only capable of what he calls ‘inferential semantics’ but can even (with the help of inputs from sensors) ‘learn’ referential semantics. We show that he goes wrong because he pays insufficient attention to the difference between language as used by humans and the sequences of inert symbols which arise when language is stored on hard drives or in books in libraries.